# Low-precision and efficient inference
## Mistral Small 3.2 24B Instruct 2506 Bf16
- License: Apache-2.0
- Publisher: mlx-community
- Downloads: 163 · Likes: 1
- Tags: large language model, multilingual

An MLX-format model converted from Mistral-Small-3.2-24B-Instruct-2506, suitable for instruction-following tasks.
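The "Bf16" in the model name means the weights are stored at 16 bits per parameter, which is what makes low-precision variants attractive in the first place. As a rough sketch (the 24B parameter count is taken from the model name; activations and the KV cache are ignored), the weight footprint can be estimated as:

```python
# Rough weight-memory estimate for a 24B-parameter model at different
# bit widths. Ignores activations and KV cache, which also need memory.

def weight_footprint_gb(n_params: float, bits_per_param: float) -> float:
    """Size of the weights alone, in decimal gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

params = 24e9  # "24B" from the model name

print(f"bf16 weights:  {weight_footprint_gb(params, 16):.0f} GB")  # 16 bits/weight
print(f"4-bit weights: {weight_footprint_gb(params, 4):.0f} GB")   # common low-bit quant
```

At bf16 the weights alone come to roughly 48 GB, which is why quantized conversions of the same checkpoint are common.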
## DeepSeek R1 GGUF
- License: MIT
- Publisher: unsloth
- Downloads: 2.0M · Likes: 1,045
- Tags: large language model, English

DeepSeek-R1 in a 1.58-bit dynamically quantized GGUF build by Unsloth. The model uses a mixture-of-experts (MoE) architecture and targets English-language tasks.
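"1.58-bit" refers to log2(3) ≈ 1.58 bits per weight: each weight is restricted to one of three values {-1, 0, +1} times a block scale. The sketch below is illustrative only, it is not Unsloth's actual dynamic-quantization recipe (which mixes precisions across layers); the function names are mine. It shows a basic ternary quantize/dequantize round-trip:

```python
import numpy as np

# Illustrative ternary ("1.58-bit") quantization round-trip.
# NOT Unsloth's dynamic-quantization scheme; just the basic idea of
# mapping each weight to {-1, 0, +1} with one scale per block.

def ternary_quantize(w: np.ndarray):
    """Quantize a weight block to {-1, 0, +1} with one absmean scale."""
    scale = float(np.abs(w).mean()) + 1e-12    # per-block scale factor
    q = np.clip(np.round(w / scale), -1, 1)    # ternary codes
    return q.astype(np.int8), scale

def ternary_dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate weights from codes and scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
q, s = ternary_quantize(w)
w_hat = ternary_dequantize(q, s)

print("unique codes:", np.unique(q))                         # subset of {-1, 0, 1}
print("mean abs reconstruction error:", float(np.abs(w - w_hat).mean()))
```

Storing only the int8 codes (packed, in practice) plus one scale per block is what drives the footprint toward ~1.58 bits per weight.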